    The Right to Explanation, Explained

    Many have called for algorithmic accountability: laws governing decision-making by complex algorithms, or AI. The EU’s General Data Protection Regulation (GDPR) now establishes exactly this. The recent debate over the right to explanation (a right to information about individual decisions made by algorithms) has obscured the significant algorithmic accountability regime established by the GDPR. The GDPR’s provisions on algorithmic accountability, which include a right to explanation, have the potential to be broader, stronger, and deeper than the preceding requirements of the Data Protection Directive. This Essay clarifies, largely for a U.S. audience, what the GDPR actually requires, incorporating recently released authoritative guidelines.

    Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability

    Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail. In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of public-private partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”). The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime.

    Regulating the Risks of AI

    Companies and governments now use Artificial Intelligence (“AI”) in a wide range of settings. But using AI leads to well-known risks that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (“EU”) have turned to the tools of risk regulation in governing AI systems. This Article describes the growing convergence around risk regulation in AI governance. It then addresses the question: what does it mean to use risk regulation to govern AI systems? The primary contribution of this Article is to offer an analytic framework for understanding the use of risk regulation as AI governance. It aims to surface the shortcomings of risk regulation as a legal approach, and to enable readers to identify which type of risk regulation is at play in a given law. The theoretical contribution of this Article is to encourage researchers to think about what is gained and what is lost by choosing a particular legal tool for constructing the meaning of AI systems in the law. Whatever the value of using risk regulation, constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields. Risk regulation tends to try to fix problems with the technology so it may be used, rather than contemplating that it might sometimes not be appropriate to use it at all. Risk regulation works best on quantifiable problems and struggles with hard-to-quantify harms. It can cloak what are really policy decisions as technical decisions. Risk regulation typically is not structured to make injured people whole. And the version of risk regulation typically deployed to govern AI systems lacks the feedback loops of tort liability. Thus, the choice to use risk regulation in the first place channels the law towards a particular approach to AI governance that makes implicit tradeoffs and carries predictable shortcomings. The second, more granular observation this Article makes is that not all risk regulation is the same. That is, once regulators choose to deploy risk regulation, there are still significant variations in what type of risk regulation they might use. Risk regulation is a legal transplant with multiple possible origins. This Article identifies at least four models for AI risk regulation that meaningfully diverge in how they address accountability.

    When the Default Is No Penalty: Negotiating Privacy at the NTIA

    Consumer privacy protection is largely within the purview of the Federal Trade Commission. In recent years, however, the National Telecommunications and Information Administration (NTIA) at the Department of Commerce has hosted multistakeholder negotiations on consumer privacy issues. The NTIA process has addressed mobile apps, facial recognition, and most recently, drones. It is meant to serve as a venue for industry self-regulation. Drawing on the literature on co-regulation and on penalty defaults, I suggest that the NTIA process struggles to extract industry expertise and participation against the backdrop of a dearth of federal data privacy law and enforcement. This problem is most acute in precisely the areas the NTIA currently addresses: consumer privacy protection around new technologies and practices. In fact, industry may be more likely to see the NTIA process as itself penalty-producing and, thus, be disincentivized from meaningful participation or adoption.

    Robots in the Home: What Will We Have Agreed To?

    A new technology can expose the cracks in legal doctrine. Sometimes a technology resists analogy. Sometimes, through analogies, it reveals inconsistencies in the law, or basic flaws in framing, or in the fit between different parts of the legal system. This Essay addresses robots in the home, and what they reveal about U.S. privacy law. Household robots might not themselves uproot U.S. privacy law, but they will reveal its inconsistencies, and show where it is most likely to fracture. Just as drones are serving as a legislative “privacy catalyst” — encouraging the enactment of new privacy laws as people realize they are not legally protected from privacy invasions — household robots may serve as a doctrinal privacy catalyst. This Essay begins by identifying the legally salient features of home robots: the aspects of home robots that will likely drive the most interesting legal questions. It then explores how current privacy law governing both law enforcement and private parties addresses a number of questions raised by home robots. There are two legal puzzles raised — or revealed — by household robots. First, there is the question of whether a robot’s permission to be in a space also grants permission to record information about that space. Second, there is the broader legal question of whether traditional legal protection of the home as a privileged, private space will withstand invasion by digital technology that has permission to be there. This Essay claims that the legally salient aspects of home robots may drive a collision between the legal understanding of privacy in real physical space and its understanding of privacy in the digital realm. That conflict in turn reveals inconsistent understandings of permission and consent in context, across privacy law.

    An Overview and the Evolution of the Anti-Counterfeiting Trade Agreement (ACTA)

    The Anti-Counterfeiting Trade Agreement (ACTA), a plurilateral intellectual property agreement developed outside of the World Intellectual Property Organization (WIPO) and the World Trade Organization (WTO), represents an attempt to introduce maximalist intellectual property standards in the international sphere, outside of existing institutional checks and balances. ACTA is primarily a copyright treaty, masquerading as a treaty that addresses dangerous medicines and defective imports. The latest ACTA draft, which is the final text available to the public before the signed text is released, contains significant shifts away from earlier draft language towards more moderate language, although it poses the same institutional problems and many of the same substantive problems as the agreement’s earlier incarnations. ACTA will be the new international standard for intellectual property enforcement, and will likely cause legislative changes in countries around the world. This paper compares the December 3, 2010 text of the Anti-Counterfeiting Trade Agreement (ACTA) to existing international intellectual property law and to a prior draft of ACTA. This paper (1) outlines the scope of ACTA as it is likely to be signed, and (2) preserves the evolution of ACTA’s language for predictive purposes, to better understand the probable parameters of future plurilateral agreements, such as the Trans-Pacific Partnership (TPP) between the United States and other countries, including Australia, Brunei, Chile, Malaysia, New Zealand, and Peru. ACTA’s most significant points of departure from existing international intellectual property law include: (1) expansive coverage of multiple kinds of IP and changes to the international definitions used in the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement); (2) the expansion of what constitutes criminal copyright violations; (3) more stringent border measures; (4) mandating closer cooperation between governments and right holders, threatening privacy and co-opting government resources for private-sector benefit; and (5) the creation of a new international institution (an ACTA “Committee”) to address IP enforcement. These changes indicate a push for standardization around a rights regime that may not be appropriate for all countries, endangering existing institutional processes and legitimacy. This paper begins by briefly covering the history of ACTA. It then outlines the scope of the most recent draft, comparing it to existing international intellectual property law. It looks at the scope of definitions and coverage of different rights; civil enforcement, including the language on digital enforcement; criminal enforcement; border measures; international cooperation; and institutional arrangements. The final section then turns to how the language of ACTA has developed. Comparing the current language in ACTA to the language of its previous officially released incarnation in April 2010 shows the interests that are likely to be raised again in future plurilateral agreements such as the Trans-Pacific Partnership (TPP). Comparisons with the April draft also lend clarity and perspective to the final draft’s vaguer language.
